On the Correctness and Sample Complexity of Inverse Reinforcement Learning

Komanduru, Abi, Honorio, Jean

Neural Information Processing Systems

Inverse reinforcement learning (IRL) is the problem of finding a reward function that generates a given optimal policy for a given Markov Decision Process. This paper presents an algorithm-independent geometric analysis of the IRL problem with finite states and actions. Motivated by this geometric analysis, an L1-regularized Support Vector Machine formulation of the IRL problem is then proposed, with the basic objective of the inverse reinforcement problem in mind: to find a reward function that generates a specified optimal policy. The paper further analyzes the proposed formulation of inverse reinforcement learning with $n$ states and $k$ actions, and shows a sample complexity of $O(d^2 \log (nk))$ for transition probability matrices with at most $d$ non-zeros per row, for recovering a reward function that generates a policy that satisfies Bellman's optimality condition with respect to the true transition probabilities.
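To make the formulation concrete: the abstract's objective builds on the characterization of Ng & Russell (2000), under which a reward vector $R$ makes an observed policy (say, action $a_1$ in every state) optimal if and only if $(P_{a_1} - P_a)(I - \gamma P_{a_1})^{-1} R \geq 0$ elementwise for every other action $a$. Below is a minimal sketch of an L1-regularized program enforcing this condition as a linear program; it is not the paper's exact SVM formulation, and the function name `l1_irl`, the margin `eps`, and the bound `r_max` are illustrative assumptions.

```python
# A minimal sketch of L1-regularized IRL in the spirit of Ng & Russell (2000).
# NOT the paper's exact SVM formulation: the margin `eps`, the bound `r_max`,
# and the convention that action 0 is optimal in every state are assumptions.
import numpy as np
from scipy.optimize import linprog

def l1_irl(P, gamma=0.9, eps=1e-3, r_max=1.0):
    """Find a sparse reward R making action 0 optimal in every state.

    P: array of shape (k, n, n), one row-stochastic transition matrix per action.
    Solves  min ||R||_1  s.t.  (P_0 - P_a)(I - gamma P_0)^{-1} R >= eps, a != 0,
    using the standard split x = [R, u] with u >= |R| elementwise.
    """
    k, n, _ = P.shape
    M = np.linalg.inv(np.eye(n) - gamma * P[0])  # (I - gamma P_0)^{-1}

    c = np.concatenate([np.zeros(n), np.ones(n)])  # objective: sum(u) = ||R||_1

    A_ub, b_ub = [], []
    for a in range(1, k):
        G = (P[0] - P[a]) @ M  # optimality-gap operator for action a
        A_ub.append(np.hstack([-G, np.zeros((n, n))]))  # G @ R >= eps
        b_ub.append(-eps * np.ones(n))
    I = np.eye(n)
    A_ub.append(np.hstack([I, -I]))   # R - u <= 0
    b_ub.append(np.zeros(n))
    A_ub.append(np.hstack([-I, -I]))  # -R - u <= 0
    b_ub.append(np.zeros(n))

    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(-r_max, r_max)] * n + [(0, r_max)] * n)
    return res.x[:n] if res.success else None
```

When the solver succeeds, the returned $R$ certifies that the observed policy satisfies Bellman's optimality condition with margin `eps` under the given transition matrices; infeasibility means no reward within the bound makes the policy optimal at that margin. In the sample-complexity setting of the abstract, the $P_a$ would be empirical estimates, and the paper's analysis concerns when a reward recovered from such estimates remains correct for the true transition probabilities.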


Reviews: On the Correctness and Sample Complexity of Inverse Reinforcement Learning

Neural Information Processing Systems

This work introduces a geometric analysis of the problem of inverse reinforcement learning (IRL) and provides a formal guarantee for the optimality of the reward function obtained from empirical data. The authors also provide the sample complexity of their proposed L1-regularized Support Vector Machine formulation. In general, this is an interesting work with a significant contribution to the theoretical aspects of the inverse reinforcement learning problem. However, there are a few concerns that need to be addressed. Major: 1. The paper does not define the problem as a stand-alone question in the field. The problem formulation relies heavily on the previous work by Ng & Russell (2000) and is written only as a follow-up to that work.


Reviews: On the Correctness and Sample Complexity of Inverse Reinforcement Learning

Neural Information Processing Systems

After discussions among the reviewers, they all agreed that the paper has sufficient theoretical contributions. They were also very happy with the rebuttal and the answers provided by the authors. Please incorporate the suggested comments and references in the final version.

